The Ethics of Artificial Intelligence

By David Higgins

Artificial Intelligence (AI) is one of the most exciting yet controversial technologies of our time, and the ethical challenges that accompany it are as complex as they are fascinating. For those who don’t know, artificial intelligence is simply defined as intelligence demonstrated by machines. We have long known that machines are superior to us at almost every physical task (for example lifting and moving objects). More recently, however, computer systems have shown real dominance in highly intellectual tasks such as the board games chess and Go, where in both cases the human world champion has been toppled. AI is forcing us to confront some of the most disputed questions of our time, and if we do not answer them soon enough, machines will answer them for us. In this project we explore five areas where AI raises ethical challenges.



Our Future with Artificial General Intelligence

Perhaps the ultimate goal of AI is to achieve ‘AGI’, or ‘Artificial General Intelligence.’ How does this differ from normal AI? An AGI is an AI with the ability to learn for itself and recursively improve its own design. If it is able to reprogram itself then, theoretically, this should lead to an intelligence explosion and hopefully give us answers to some of life’s biggest questions. As Irving J. Good put it in 1965,

“The first ultraintelligent machine is the last invention that man need ever make, provided the machine is docile enough to tell us how to keep it under control”

A massive amount of emphasis must be placed on the second part of that quote. Designing an AGI is one thing, but controlling a dramatically more intelligent machine is a separate problem entirely. The hope is therefore to create an intelligent machine whose goals are aligned with our own. The difficulty is that we humans do not even have a perfect ethical framework and set of goals to give to our machines, so this is the first important problem we must overcome.

Once this problem is solved, how does one actually instil these ethical principles into a superintelligent system? The most obvious answer is to hardcode the principles into the AI’s ‘system.’ However, there is a very interesting dilemma with this method, and we see it in none other than ourselves. We too have had our ultimate goals hardwired into us, through our genes. Our ultimate goals, since the beginning of evolution, have been to survive and reproduce. To aid us in this process we have developed incentivised feelings and emotions. Hunger and thirst mean we don’t starve. Pain and fear deter us from dangerous situations. Finally, love and lust help us to reproduce. Despite this, humans have found ways to defy our ultimate goals in order to satisfy our feelings and emotions. We drink and eat substances with no nutritional value. We love the thrill of simulated danger in a theme park. Most noticeably, we use birth control to satisfy our sexual needs without fulfilling the biological imperative to reproduce. So what does this ultimately show? Just because we can embed our own ethics and goals into an AI, there is no guarantee it will stick to them. This is our first major ethical concern.

Another interesting thought is that no matter what goal we instil in an AI, it could still be dangerous to humans. For any machine or being to achieve a goal, it must create sub-goals. For example, just to drink a cup of tea one must find the cup, lift it, and bring it to the mouth. A very likely sub-goal for any system is survival, since it can never achieve its ultimate goal if it is terminated. Another sub-goal may be resource acquisition, to aid it in its mission. However, should a system view humans as a danger (challenging its survival goal) or as an obstacle (challenging its resource acquisition goal), it might well deem eliminating humans a worthwhile task. We could build an AGI with the ultimate goal of making as many paper aeroplanes as possible, and it could inadvertently cause the end of mankind.



The Driverless Car Problem

Self-driving cars may revolutionise the way we experience transportation; however, before that we must answer some very difficult moral and ethical questions. Firstly, imagine a scenario in which the car knows it is about to crash – it just has to deal with the crash in the most ethical way possible. Should the car sacrifice those in the car or those on the road? Surely nobody would buy a car that would prioritise others over them? But what should the car do if there is a single driver and the car is about to crash into ten people crossing the road? Would it then be justified to sacrifice the driver?

We see great similarities to the famous ‘Trolley Problem.’ This problem is as follows:

a) Imagine you are standing next to a train track. A train is coming and you can see five people on the track, but you don’t have time to warn them. There is, however, a switch which will divert the train onto another track. The only problem is that there is one person on the other track who, again, would not hear your warning. Would you flip the switch?

b) Now imagine a slightly different version of the problem. There is still a train heading towards five people and you do not have time to warn them. This time, however, you are standing on a bridge above the track next to a large man. If you were to push the man off the bridge, his body would obstruct the train just enough to save the five lives. Would you push the man?

The interesting thing is that most people will say ‘yes’ to part a), flipping the switch and sacrificing one life to save five. However, most people will say ‘no’ to part b), even though it is abstractly the same problem (sacrifice one life to save five). The Trolley Problem gives an interesting insight into whether moral decisions are based simply on outcomes or on the manner in which they are achieved.
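To make the distinction concrete, here is a minimal sketch, in Python, of two decision rules a machine could be given. The scenarios, the numbers and the uses_person_as_means flag are hypothetical assumptions made purely for illustration, not anyone’s real decision system: one rule looks only at outcomes, while the other also refuses options that actively use a person as a means to an end.

```python
# A minimal sketch contrasting two decision rules on trolley-style dilemmas.
# The scenarios and numbers are illustrative assumptions, not real vehicle logic.

def outcome_based(options):
    """Pick the option that minimises total deaths, and nothing else."""
    return min(options, key=lambda o: o["deaths"])

def action_constrained(options):
    """Minimise deaths, but first rule out options that use a person as a means."""
    permitted = [o for o in options if not o["uses_person_as_means"]]
    return min(permitted or options, key=lambda o: o["deaths"])

# Part a): divert the train with a switch.
switch_case = [
    {"name": "do nothing",  "deaths": 5, "uses_person_as_means": False},
    {"name": "flip switch", "deaths": 1, "uses_person_as_means": False},
]

# Part b): push the large man off the bridge.
bridge_case = [
    {"name": "do nothing",   "deaths": 5, "uses_person_as_means": False},
    {"name": "push the man", "deaths": 1, "uses_person_as_means": True},
]

for case in (switch_case, bridge_case):
    print(outcome_based(case)["name"], "|", action_constrained(case)["name"])

# The two rules agree on the switch case but diverge on the bridge case,
# mirroring the way most people answer the two versions differently.
```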

Another dilemma is that if a self-driving car crashes, who is at fault? In 2018, a woman became the victim of the first driverless car fatality. Uber was running a test in the state of Arizona at night. When the woman, a pedestrian, came into the car’s view, the system could not work out what she was or where she was going, and a fatal collision occurred. The self-driving car had a safety operator, but they were reported to have been watching TV in the car when the accident happened. In this instance the safety operator clearly bears a high degree of liability, but who else is to blame? In this case the blame was in fact divided between the safety operator, Uber, the car itself, the victim and even the state of Arizona. The incident did, however, spark a series of studies into liability when an accident involves a driverless car. Some researchers have proposed game-theoretic models covering drivers, manufacturers, the car itself and law-makers, with the goal of finding an arrangement in which no party can take advantage of another.
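The details of those models are beyond this project, but a toy sketch gives the flavour. Every number below, and the 50/50 liability split, is a hypothetical assumption for illustration only: each party chooses how much care to take, a liability rule splits the expected cost of an accident between them, and we check whether mutual care is an equilibrium that neither party can improve on by unilaterally being less careful.

```python
# A toy illustration of the game-theoretic framing of driverless car liability.
# All numbers are hypothetical assumptions made purely for illustration.
import itertools

CARE_COST = {"careful": 3, "careless": 0}           # cost of taking care
ACCIDENT_COST = {                                   # expected cost of accidents
    ("careful", "careful"): 2,
    ("careful", "careless"): 10,
    ("careless", "careful"): 10,
    ("careless", "careless"): 20,
}

def costs(m_choice, d_choice, manufacturer_share):
    """Expected cost to (manufacturer, driver) under a given liability split."""
    accident = ACCIDENT_COST[(m_choice, d_choice)]
    m = CARE_COST[m_choice] + manufacturer_share * accident
    d = CARE_COST[d_choice] + (1 - manufacturer_share) * accident
    return m, d

def is_equilibrium(m_choice, d_choice, share):
    """True if neither party can cut its own cost by unilaterally changing behaviour."""
    m_cost, d_cost = costs(m_choice, d_choice, share)
    for alt in ("careful", "careless"):
        if costs(alt, d_choice, share)[0] < m_cost:
            return False
        if costs(m_choice, alt, share)[1] < d_cost:
            return False
    return True

# With a 50/50 liability split, mutual care is the only stable outcome here.
for m, d in itertools.product(("careful", "careless"), repeat=2):
    if is_equilibrium(m, d, 0.5):
        print("equilibrium:", m, d)
```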

It is important to note that whilst self-driving cars are ethically challenging, they should also greatly improve the safety of our roads. The US Department of Transportation has estimated that up to 94% of crashes involve human error, the biggest cause of accidents in almost every sector, and it is exactly this error that autonomous vehicles remove. Whilst it may sound intimidating now, the more control we hand over to technology, the safer our lives tend to become. We see this in the aviation industry, where, as planes and helicopters have become more autonomous, fewer crashes have occurred.



AI for Weapons of Mass Destruction

Again we find ourselves pondering whether AI will cause the extinction of humanity. Hopefully not, but to ensure this we must think very carefully about the applications of AI, especially in weapons. Such weapons have been named Lethal Autonomous Weapons, or ‘LAWs.’ What might a LAW look like? The most prominent idea at the moment is a ‘killer drone’ which uses facial recognition cameras to identify individuals or groups of people and eliminate them.

How dangerous could this be? Imagine if the technology fell into the wrong hands. Not only would it be extremely dangerous, it could also be very cheap and easy to produce. The technology would not be especially difficult to build and could take as little as two years to develop. In the case of a drone, the parts could be bought from a local shop, unlike guns and bombs. The robots would have no fear of death and would therefore do whatever it takes to neutralise a target. There is also a fear that the technology could be used by extremists for ethnic cleansing, since it would be very easy for an AI to target specific groups. LAWs have been described as “the third revolution in warfare”, following gunpowder and nuclear weaponry.

So why on earth is this technology still under consideration? Unfortunately, some military powers still see potential advantages in LAWs.

To date, all United Nations talks on a LAW treaty have ended in stalemate, since the treaty process is slow and laborious. Moreover, some countries, such as the US, the UK, South Korea and Australia, have opposed a treaty. They believe it is too early to discard LAWs and that there may be benefits still to be discovered.

The majority of AI researchers strongly oppose the technology. Understandably, they do not want the reputation of their work and industry to be tarnished by the needs of the military. Consequently, over 3,000 leading researchers have signed an open letter which aims to protect the reputation of AI. Researchers also fear that such technology could prompt legislation that inhibits AI progress in other areas that are beneficial to life. What do you think? Could LAWs ever be justified? These are the conversations we must be having whilst we still can.



Racism and Bias in AI Systems

An algorithm is only a product of the thoughts and views imposed on it by its creator and by the environment it exists in. This isn’t to say that humans purposely program bias into their machines, although that does happen. It is incredibly important for a programmer to acknowledge and appreciate how their thoughts and views may differ from other people’s, especially if their program will be used by many. Bias can also occur in systems that require data input: the training data for machine learning programs is often unintentionally biased. How might this happen? The location, the date and the people involved in data collection can very easily introduce bias. For example, imagine a machine learning algorithm used to detect whether or not something is a shoe. If the data fed into it consisted primarily of trainers, boots and flip-flops, it might never recognise a high-heeled shoe as a shoe, simply because it has never been exposed to one.
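To see how easily this happens, here is a minimal sketch of a deliberately over-simple ‘shoe detector’. The heel-height figures and the range-based rule are made-up assumptions used only to show how a model trained on a narrow sample can reject a perfectly valid example it has never seen.

```python
# A minimal sketch of how unrepresentative training data produces a biased model.
# The heel-height figures are made-up assumptions, purely for illustration.

TRAINING_SHOES = {
    "trainer": 1.0,    # heel height in cm
    "boot": 3.0,
    "flip-flop": 0.5,
}

# The "model" simply learns the range of heel heights it saw during training
# and calls anything outside that range "not a shoe".
learned_min = min(TRAINING_SHOES.values())
learned_max = max(TRAINING_SHOES.values())

def is_shoe(heel_height_cm):
    return learned_min <= heel_height_cm <= learned_max

print(is_shoe(2.0))    # True:  similar to the boots it has seen
print(is_shoe(10.0))   # False: a high heel falls outside everything it has seen
```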

The presence of bias in our machines and algorithms has led to the accidental development of racist machines. In 2016 an algorithm named COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), used by the US court system, was found to be biased against black defendants. The program wrongly flagged black defendants as likely to reoffend at almost twice the rate of white defendants (45% versus 24%). Ironically, the algorithm had been created to help reduce bias in the courthouse after the US court system was criticised for racial bias. The private company that developed the algorithm refused to release the inner workings of the program, citing commercial sensitivity.
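The disparity reported here can be expressed as a difference in false positive rates: the share of people in each group who did not go on to reoffend but were nevertheless flagged as high risk. The sketch below computes that measure on a handful of made-up records; the data and the group labels are assumptions for illustration, not the real COMPAS dataset.

```python
# Computing the group-wise false positive rate: the share of people who did NOT
# reoffend but were flagged as high risk anyway. The records are made up for
# illustration and are not the real COMPAS data.

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("A", True,  False),
    ("A", True,  False),
    ("A", False, False),
    ("A", True,  True),
    ("B", True,  False),
    ("B", False, False),
    ("B", False, False),
    ("B", True,  True),
]

def false_positive_rate(group):
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, round(false_positive_rate(group), 2))

# A gap like 0.67 versus 0.33 here is the same shape of finding as the reported
# 45% versus 24% gap between black and white defendants.
```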

In the same year, Microsoft released an AI chatbot called Tay, a Twitter account (tweeting as ‘TayTweets’) that aimed to mimic the language and tweeting style of an 18-24 year old. What Microsoft did not anticipate was that the chatbot would be targeted by Internet trolls with highly offensive tweets. Since Tay was a smart system that learned from its surroundings, it quickly picked up these messages, and before long it had posted its own series of racist, homophobic, sexist, antisemitic and transphobic tweets. After 16 hours, Microsoft was forced to disable the account. The episode also serves as a warning to parents about the impact of social media on easily manipulated minds.

Perhaps both examples speak to a bigger picture of how powerful human influence really is. The COMPAS algorithm had bias embedded into it by its human developers. TayTweets had bias embedded into it by the human environment it was exposed to and learned from. If we want to develop a truly unbiased system, we need to find ways to separate ourselves from it. Again, we see how our own lack of a perfect ethical framework affects our machines, which ultimately do exactly what we tell them.



The Rights of Robots

‘Consciousness’, ‘intelligence’ and ‘life’ are three incredibly hard-to-define terms that many leading AI researchers still dispute. For the purpose of this project, we’ll define consciousness as ‘being aware of oneself and one’s surroundings.’ Currently we feel no guilt in ordering a robot to carry out our commands, because we know it has no consciousness. But should a robot experience thoughts, feelings and pain, would we still feel no guilt? If your answer is yes, then please consider how different that is from the ideology behind slavery.

Some may argue that a machine could never become conscious, since we have simply programmed it. However, from a physics perspective, everything in our universe is made up of different arrangements of quarks. Which arrangements of quarks define consciousness, or is there something more? As Edward R. Harrison put it,

“Given enough time, Hydrogen starts to wonder where it came from, and where it is going.”

We can actually draw quite a lot of parallels between machines and humans. Whilst we are carbon-based, machines are silicon-based (silicon being a close chemical relative of carbon). Machines contain electronic programming, whilst we contain genetic programming. Finally, machines are built from mechanical components whilst we are built from organic ones, yet both use electrical impulses to function. Is it therefore so unrealistic to consider the possibility of machine consciousness?

Now let’s consider a scenario in which robots do have consciousness. What rights, if any, should be given to the machines? We saw in the earlier example of the self-driving car accident that the car itself was held somewhat accountable for the crash. If a machine can take blame, then surely it should also be able to receive a reward? And if machines can feel pain and suffering, should they be awarded human rights? If not, are we being discriminatory?

Again, instilling ethics into our machines would be hugely important if we were ever to award them rights. This conversation also gives us a better understanding of what really makes us human. A parent may want their child to grow up to be a doctor; however, if the child wants to become a lawyer instead, it has the right to do so. What, then, is to say that machines should not be allowed to rebel against their initial purpose if they are sentient beings? In that case, would having intelligent robots actually help us at all? Nevertheless, keep an eye on your toaster.

As we analyse some of the great dangers and potential problems of AI, it is important to also appreciate the opportunities it presents. In a recent study, an AI diagnostics tool outperformed some radiologists at detecting breast cancer. AI in natural language processing and speech synthesis has given us virtual assistants such as Siri and Alexa. Machine learning algorithms used by Uber and Lyft mean people are able to travel around their cities as quickly as possible. The technology is also helping us to understand much more about ourselves and about life, with fantastic crossovers into subjects such as neuroscience and psychology. There are undoubtedly risks, but if we can answer some of life’s most challenging questions, AI will certainly provide us with an array of opportunities.


